@Cofty: who unplugs what exactly?
First, define what topic we are actually talking about. People here seem to think that AGI and ChatGPT are nearly equivalent; they're not even in the same realm. ChatGPT is a simple model that can fit in a very small memory footprint. There are models 100-200x larger than GPT, and even 1000x larger is well within the realm of anyone with $100k to spare for the hardware.
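To put the "memory footprint" point in concrete terms, a rough back-of-the-envelope sketch: a model's raw weight storage is roughly parameter count times bytes per parameter. The sizes below are hypothetical round numbers for illustration, not the actual sizes of any GPT release.

```python
def model_memory_gb(num_params: float, bytes_per_param: int) -> float:
    """Approximate raw weight storage in gigabytes (1 GB = 1e9 bytes).

    Ignores activations, optimizer state, and runtime overhead; this is
    just the weights on disk or in RAM.
    """
    return num_params * bytes_per_param / 1e9

# Hypothetical examples: a 1B-parameter model in fp16 (2 bytes/param)
# versus one 100x larger, same precision.
small = model_memory_gb(1e9, 2)    # 2.0 GB
large = model_memory_gb(100e9, 2)  # 200.0 GB
```

The point is that scaling is linear in weights: a model 100x larger needs roughly 100x the memory, which is why "1000x larger" stops being a research-lab exclusive once you can buy the hardware.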
If you make the case that GPT models are "evil" and their progression must be halted, consider that there are better algorithms already in use that are far more capable and powerful.
I work on "AI" models (advanced self-driving filters, which we call classifiers, not AI) that do automated path tracing and activation measurements in brains. This lets us model the movement of blood and the relationships among thousands of individual "nodes" in the brain.
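For readers unfamiliar with what "automated path tracing" means in practice, here is a minimal sketch of the general idea: treat the imaging volume as a graph of voxels, make strongly activated voxels cheap to traverse, and run a shortest-path search so the trace follows the activated structure. This is an illustrative toy (2D grid, Dijkstra's algorithm, made-up cost function), not the actual pipeline described in the post.

```python
import heapq

def trace_path(activation, start, goal):
    """Trace a path through a 2D activation grid from start to goal.

    Each grid cell is a node; stepping onto a cell costs the inverse of
    its activation, so the search prefers highly activated cells
    (e.g. a bright vessel or tract in an image).
    """
    rows, cols = len(activation), len(activation[0])

    def cost(r, c):
        # high activation -> low traversal cost (epsilon avoids div by zero)
        return 1.0 / (activation[r][c] + 1e-6)

    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost(nr, nc)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))

    # Reconstruct the path by walking predecessors back to the start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy activation map: a bright diagonal structure on a dim background.
act = [
    [9.0, 0.1, 0.1],
    [0.1, 9.0, 0.1],
    [0.1, 0.1, 9.0],
]
path = trace_path(act, (0, 0), (2, 2))
```

In a real pipeline the cost function would come from a trained classifier's per-voxel scores rather than raw intensity, but the graph-search skeleton looks much the same.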
China is using similar models on a city-wide and perhaps even country-wide scale, in a pipeline with widespread CCTV, to classify individual nodes (i.e. people), their movements, and their potential relationships. London has even more CCTV and uses something similar to classify threats. So suppose the US agrees to pause GPT, which is a simplified version of what we use, applied to language. Do you think China will agree? The U.K.? That the military won't have some secret program that continues the research?
In the end, these AI models are just computer programs; we can disassemble and understand them. We're just too lazy to actually do it, because for the most part they are very wasteful. Biological creatures are much better optimized and more well rounded. I do think we can eventually approximate a simple creature like a dog, but for human-level intelligence and beyond, we first need to understand what that even means before we can build a program that moves in that direction.